
Deploying Calico

1) Ensure that Calico can manage the cali and tunl interfaces on the host. If NetworkManager is present on the host, configure it as described below.

NetworkManager manipulates the routing table for interfaces in the default network namespace, which is where the Calico veth pairs are anchored to connect to containers. This can interfere with the Calico agent's ability to route correctly.

Create the following configuration file at the path below to stop NetworkManager from touching these interfaces:

vim /etc/NetworkManager/conf.d/calico.conf

[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*
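
After creating the file, restart NetworkManager so it picks up the new configuration; once Calico has created its interfaces, nmcli should report them as unmanaged. A quick check:

systemctl restart NetworkManager
nmcli device status | grep -E 'cali|tunl'   # expected state: unmanaged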

2) First download the calico.yaml manifest, then change CALICO_IPV4POOL_IPIP to Never so that BGP mode is used.

Also add IP_AUTODETECTION_METHOD set to interface matching; the default first-found mode can still pick the wrong interface in complex network environments.

wget https://docs.projectcalico.org/manifests/calico.yaml

vim calico.yaml

            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # IP automatic detection
            - name: IP_AUTODETECTION_METHOD
              value: "interface=en.*"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # Disable IPIP encapsulation so that plain BGP routing is used
            - name: CALICO_IPV4POOL_IPIP
              value: "Never"
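
Before applying the manifest, it is worth confirming that all three edits landed where you expect. A quick check with plain grep (no assumptions beyond the file name):

grep -nA1 -E 'CLUSTER_TYPE|IP_AUTODETECTION_METHOD|CALICO_IPV4POOL_IPIP' calico.yaml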

Then apply the manifest with kubectl apply -f calico.yaml. You should see output like the following:

configmap "calico-config" created
customresourcedefinition.apiextensions.k8s.io "felixconfigurations.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "ipamblocks.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "blockaffinities.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "ipamhandles.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "bgppeers.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "bgpconfigurations.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "ippools.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "hostendpoints.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "clusterinformations.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "globalnetworkpolicies.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "globalnetworksets.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "networksets.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "networkpolicies.crd.projectcalico.org" created
clusterrole.rbac.authorization.k8s.io "calico-kube-controllers" created
clusterrolebinding.rbac.authorization.k8s.io "calico-kube-controllers" created
clusterrole.rbac.authorization.k8s.io "calico-node" created
clusterrolebinding.rbac.authorization.k8s.io "calico-node" created
daemonset.extensions "calico-node" created
serviceaccount "calico-node" created
deployment.extensions "calico-kube-controllers" created
serviceaccount "calico-kube-controllers" created

3) Confirm that all pods are running with the following command.

watch kubectl get pods --all-namespaces

Wait until every calico pod shows Running.

NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-6ff88bf6d4-tgtzb   1/1     Running   0          2m45s
kube-system   calico-node-24h85                          1/1     Running   0          2m43s
kube-system   calico-node-45k48                          1/1     Running   0          2m43s
kube-system   coredns-846jhw23g9-9af73                   1/1     Running   0          4m5s
kube-system   coredns-846jhw23g9-hmswk                   1/1     Running   0          4m5s
kube-system   etcd-jbaker-1                              1/1     Running   0          6m22s
kube-system   kube-apiserver-jbaker-1                    1/1     Running   0          6m12s
kube-system   kube-controller-manager-jbaker-1           1/1     Running   0          6m16s
kube-system   kube-proxy-8fzp2                           1/1     Running   0          5m16s
kube-system   kube-scheduler-jbaker-1                    1/1     Running   0          5m41s

Press CTRL+C to exit watch.
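
Once everything is Running, you can optionally confirm that IPIP is really off and plain BGP routing is in use. A minimal check, assuming calicoctl is installed on a node that runs calico-node:

calicoctl get ippool -o wide   # the IPIPMODE column should read "Never"
calicoctl node status          # BGP peers should show "Established"
ip route | grep bird           # cross-node pod routes are learned from BIRD, with no tunl0 device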

4) If you are switching from another network plugin, first clean up the leftover routes and bridges on each node to avoid conflicts with Calico.

ip link
ip link delete flannel.1
ip route
ip route delete 10.244.0.0/24 via 10.4.7.21 dev eth0

After uninstalling the other network plugin, it is best to reboot all nodes; the system then resets its NIC rules and the stale ones are cleaned up automatically.
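
If the previous plugin was flannel, its CNI configuration and bridge also tend to linger. A sketch of the extra cleanup, assuming flannel's default file name (check what actually exists under /etc/cni/net.d/ first):

ls /etc/cni/net.d/
rm -f /etc/cni/net.d/10-flannel.conflist
ip link delete cni0    # the bridge created alongside flannel, if present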

Creating a NetworkPolicy

[root@k8s001 network]# cat networkpolicy.yaml

# --------------------------------------------------------------------------------------------------------------------
# Notes:
# These restrictions currently apply to a single network only.
# 1. In a mixed gigabit + 10-gigabit cluster, restricting pod access to external storage (HDFS and Ceph) means
#    restricting the 10-gigabit network.
# 2. Adjust this configuration to match your own environment.
# 3. Switching network modes can destabilize the entire cluster and lose data; proceed with caution.
# --------------------------------------------------------------------------------------------------------------------
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: yuhaohao
spec:
  podSelector:
    matchLabels:
      component: nginx
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 0.0.0.0/0
  egress:
  # Allow all pod-to-pod traffic inside the cluster
  - to:
    - ipBlock:
        cidr: 172.20.0.0/16
  # Allow pods to reach port 6443 on the 10.16.153.x subnet, i.e. the cluster's api-server
  - to:
    - ipBlock:
        cidr: 10.16.153.0/24
    ports:
    - protocol: TCP
      port: 6443
  # Allow pods to reach port 8443 on the 10.47.153.x subnet, i.e. the api-server behind the HA entry point
  - to:
    - ipBlock:
        cidr: 10.47.153.0/24
    ports:
    - protocol: TCP
      port: 8443
[root@k8s001 network]# kubectl apply -f networkpolicy.yaml
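
To check that the policy behaves as intended, test from a pod that matches the component: nginx selector. A hypothetical probe (the pod name nginx-xxxx and the api-server address 10.16.153.10 are placeholders for your environment; raw IPs are used because the policy does not open DNS):

kubectl -n yuhaohao describe networkpolicy test-network-policy
# Allowed: TCP 6443 on the 10.16.153.0/24 subnet
kubectl -n yuhaohao exec nginx-xxxx -- curl -sk --max-time 5 https://10.16.153.10:6443/version
# Not covered by any egress rule, so this should time out
kubectl -n yuhaohao exec nginx-xxxx -- curl -s --max-time 5 http://1.1.1.1 || echo "egress blocked as expected"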